Lévy process





Score-based Generative Models with Lévy Processes

Neural Information Processing Systems

Investigating the optimal stochastic process beyond Gaussian for noise injection in a score-based generative model remains an open question. Brownian motion is a light-tailed process with continuous paths, which leads to a slow convergence rate with respect to the number of function evaluations (NFE). Recent studies have also shown that diffusion models suffer from mode collapse on imbalanced data. To overcome these limitations of Brownian motion, we introduce a novel score-based generative model referred to as the Lévy-Itō Model (LIM).
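The heavy-tailed noise injection motivating LIM can be illustrated with a minimal sketch (not the paper's implementation): symmetric α-stable increments drawn via the Chambers-Mallows-Stuck method are added to data in place of Gaussian increments. The noise scale and α below are illustrative.

```python
import numpy as np

def symmetric_alpha_stable(alpha, size, rng):
    """Draw symmetric alpha-stable samples (beta = 0) via the
    Chambers-Mallows-Stuck method; alpha = 2 recovers a Gaussian
    up to scale, alpha = 1 the Cauchy distribution."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit exponential
    if alpha == 1.0:
        return np.tan(u)                          # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 2))   # toy "data" batch
sigma_t = 0.5                  # illustrative noise scale at time t
# Brownian (light-tailed) corruption vs alpha-stable (heavy-tailed) corruption:
x_gauss = x0 + sigma_t * rng.normal(size=x0.shape)
x_levy = x0 + sigma_t * symmetric_alpha_stable(1.5, x0.shape, rng)
```

The α-stable draws occasionally produce very large jumps, which is exactly the heavy-tailed behavior that Brownian-driven forward processes lack.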


Score-based Metropolis-Hastings for Fractional Langevin Algorithms

Aloui, Ahmed, Liao, Junyi, Hasan, Ali, Blanchet, Jose, Tarokh, Vahid

arXiv.org Machine Learning

Sampling from heavy-tailed and multimodal distributions is challenging when neither the target density nor the proposal density can be evaluated, as in α-stable Lévy-driven fractional Langevin algorithms. While the target distribution can be estimated from data via score-based or energy-based models, the α-stable proposal density and its score are generally unavailable, rendering classical density-based Metropolis-Hastings (MH) corrections impractical. Consequently, existing fractional Langevin methods operate in an unadjusted regime and can exhibit substantial finite-time errors and poor empirical control of tail behavior. We introduce the Metropolis-Adjusted Fractional Langevin Algorithm (MAFLA), an MH-inspired, fully score-based correction mechanism. MAFLA employs designed proxies for fractional proposal score gradients under isotropic symmetric α-stable noise and learns an acceptance function via Score Balance Matching. We empirically illustrate the strong performance of MAFLA on a series of tasks, including combinatorial optimization problems, where the method significantly improves finite-time sampling accuracy over unadjusted fractional Langevin dynamics.
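For context, the unadjusted chain that MAFLA corrects can be sketched as follows. This is not MAFLA itself (its learned acceptance function and fractional score proxies are not reproduced here); it is a plain α-stable-driven Langevin update on a toy Gaussian target, with the ordinary score used as the drift and illustrative step size and α.

```python
import numpy as np
from scipy.stats import levy_stable

def score(x):
    """Score of a standard Gaussian toy target: grad log pi(x) = -x."""
    return -x

def unadjusted_fla_step(x, eta, alpha, rng):
    """One unadjusted fractional Langevin update driven by symmetric
    alpha-stable noise; note the eta**(1/alpha) scaling of the noise,
    the alpha-stable analogue of the Brownian sqrt(eta) scaling."""
    noise = levy_stable.rvs(alpha, 0.0, size=x.shape, random_state=rng)
    return x + eta * score(x) + eta ** (1.0 / alpha) * noise

rng = np.random.default_rng(1)
x = np.zeros(5)
for _ in range(100):
    x = unadjusted_fla_step(x, eta=0.05, alpha=1.7, rng=rng)
```

Because no accept/reject step is applied, the chain's stationary distribution is biased away from the target; MAFLA's score-based acceptance mechanism is designed to correct precisely this finite-step error.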




Expected Signature Kernels for Lévy Rough Paths

Friz, Peter K., Hager, Paul P.

arXiv.org Machine Learning

The expected signature kernel arises in statistical learning tasks as a similarity measure of probability measures on path space. Computing this kernel for known classes of stochastic processes is an important problem that, in particular, can help reduce computational costs. Building on the representation of the expected signature of (inhomogeneous) Lévy processes with absolutely continuous characteristics as the development of an absolutely continuous path in the extended tensor algebra [F.-H.-Tapia, Forum of Mathematics: Sigma (2022), "Unified signature cumulants and generalized Magnus expansions"], we extend the arguments developed for smooth rough paths in [Lemercier-Lyons-Salvi, "Log-PDE Methods for Rough Signature Kernels"] to derive a PDE system for the expected signature of inhomogeneous Lévy processes. As a specific example, we see that the expected signature kernel of Gaussian martingales satisfies a Goursat PDE.
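For orientation, the Goursat structure referred to here generalizes the known PDE for the signature kernel of two smooth (deterministic) paths $x, y$, a standard result in the signature-kernel literature; the notation below is illustrative and the Lévy case in the paper replaces the driving paths by characteristics of the processes.

```latex
% Signature kernel of smooth paths x, y:
%   k(s,t) = < S(x)_{0,s} , S(y)_{0,t} >
% solves the hyperbolic Goursat problem
\[
  \frac{\partial^2 k}{\partial s\,\partial t}(s,t)
    = \langle \dot{x}_s, \dot{y}_t \rangle\, k(s,t),
  \qquad k(s,0) = k(0,t) = 1 .
\]
```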



Generative modelling with jump-diffusions

Baule, Adrian

arXiv.org Machine Learning

Score-based diffusion models generate samples from an unknown target distribution using a time-reversed diffusion process. While such models represent state-of-the-art approaches in industrial applications such as artificial image generation, it has recently been noted that their performance can be further improved by injecting noise with heavy-tailed characteristics. Here, I present a generalization of generative diffusion processes to a wide class of non-Gaussian noise processes. I consider forward processes driven by standard Gaussian noise with superimposed Poisson jumps representing a finite-activity Lévy process. The generative process is shown to be governed by a generalized score function that depends on the jump amplitude distribution. Both probability flow ODE and SDE formulations are derived with only basic technical effort, and are implemented for jump amplitudes drawn from a multivariate Laplace distribution. Remarkably, for the problem of capturing a heavy-tailed target distribution, the jump-diffusion Laplace model outperforms models driven by α-stable noise despite not containing any heavy-tailed characteristics. The framework can readily be applied to other jump statistics that could further improve on the performance of standard diffusion models.
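The forward process described above can be sketched with a minimal Euler simulation (not the paper's code): Gaussian diffusion increments plus finite-activity Poisson jumps with Laplace amplitudes. Parameters are illustrative, and the small-time-step approximation of at most one jump per step per coordinate is a deliberate simplification.

```python
import numpy as np

def jump_diffusion_forward(x0, n_steps, dt, sigma, lam, b, rng):
    """Euler simulation of a forward jump-diffusion: Brownian increments
    of scale sigma*sqrt(dt) plus compound-Poisson jumps of rate lam with
    Laplace(0, b) amplitudes. For lam*dt << 1 we approximate the Poisson
    count per step by a single Bernoulli jump indicator."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + sigma * np.sqrt(dt) * rng.normal(size=x.shape)  # diffusion part
        jump = rng.random(size=x.shape) < lam * dt              # jump indicator
        x = x + jump * rng.laplace(0.0, b, size=x.shape)        # Laplace amplitude
    return x

rng = np.random.default_rng(2)
xT = jump_diffusion_forward(np.zeros(8), n_steps=200, dt=0.01,
                            sigma=1.0, lam=2.0, b=0.5, rng=rng)
```

Even though each Laplace amplitude is light-tailed, the superposition of jumps on the Gaussian background produces the non-Gaussian corruption that the generalized score function in the paper accounts for.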